
tests: refactor task-list-tests (direct functional calls + happy path)#305

Open
saathviksheerla wants to merge 4 commits into openml:main from saathviksheerla:refactor/task-list-tests-direct-calls

Conversation

@saathviksheerla
Contributor

Description

Fixes #300

  • Migration test already covered in tasks_migration_test.py
  • Happy-path test of the Python endpoint by calling the API
  • Input-validation tests that call the Python API, specifically for validation handled through FastAPI + Pydantic's function type annotations
  • The other tests are direct functional tests that verify the behavior of the functions, including correct parameter processing and error raising, by calling the Python function that implements the endpoint, as suggested.

@coderabbitai
Contributor

coderabbitai bot commented Apr 1, 2026

Walkthrough

Tests for the task-list functionality were refactored to call the underlying async data-access function list_tasks(...) directly using an expdb_test: AsyncConnection fixture instead of relying solely on end-to-end HTTP assertions. Existing filter, pagination, and number-of-instances range checks were moved or reimplemented as direct-call assertions on returned task objects; HTTP tests were kept/added for input validation, pagination clamping, schema errors (422), and mapping NoResultsError to a 404 problem response.
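The direct-call pattern described in this walkthrough can be sketched in isolation. The `list_tasks` and `NoResultsError` names follow the walkthrough, but the stub bodies below are hypothetical stand-ins, not the project's real implementation:

```python
import asyncio


class NoResultsError(Exception):
    """Stand-in for the project's domain error (hypothetical stub)."""


_TASKS = [{"task_id": 1, "tag": "OpenML100"}, {"task_id": 2, "tag": "other"}]


async def list_tasks(tag=None):
    """Hypothetical stand-in for the real async data-access function."""
    tasks = [t for t in _TASKS if tag is None or t["tag"] == tag]
    if not tasks:
        raise NoResultsError("no tasks match the given filters")
    return tasks


async def direct_call_checks():
    # Direct call: assert on the returned task objects, no HTTP round-trip.
    tasks = await list_tasks(tag="OpenML100")
    assert all(t["tag"] == "OpenML100" for t in tasks)
    # Direct call: assert the domain exception, not an HTTP status code.
    try:
        await list_tasks(tag="does-not-exist")
    except NoResultsError:
        pass
    else:
        raise AssertionError("expected NoResultsError")


asyncio.run(direct_call_checks())
```

Direct calls like this exercise filter and error logic without the HTTP layer, which is why the refactor keeps a smaller set of HTTP tests only for validation, clamping, and error-mapping behavior.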


Suggested labels

tests

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Title check ✅ Passed: The title clearly and concisely describes the main change: refactoring task-list tests to use direct functional calls and happy-path testing, which aligns with the actual changeset.
  • Description check ✅ Passed: The description is directly related to the changeset, referencing issue #300 and detailing the specific testing-approach changes (direct calls, happy path, input validation) implemented in the test refactoring.
  • Linked Issues check ✅ Passed: The pull request fully implements all four objectives from issue #300: migration test coverage, happy-path API test, input-validation tests via FastAPI+Pydantic, and direct functional tests for parameter processing and error handling.
  • Out of Scope Changes check ✅ Passed: All changes are scoped to refactoring task-list tests as specified in issue #300; no unrelated modifications to production code or other test modules are present.
  • Docstring Coverage ✅ Passed: Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.



@codecov

codecov bot commented Apr 1, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.14%. Comparing base (3f1a7ac) to head (62dec6d).

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #305      +/-   ##
==========================================
+ Coverage   93.04%   93.14%   +0.09%     
==========================================
  Files          71       71              
  Lines        2991     3003      +12     
  Branches      221      220       -1     
==========================================
+ Hits         2783     2797      +14     
  Misses        152      152              
+ Partials       56       54       -2     


@saathviksheerla saathviksheerla marked this pull request as ready for review April 1, 2026 03:50
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues and left some high-level feedback:

  • The test_list_tasks_negative_pagination_safely_clamped docstring describes potential future behavior (422 from Pydantic) that is not asserted in the test, which can become misleading over time; consider either removing the forward-looking note or encoding that expectation explicitly (e.g., via parametrization) when/if the behavior changes.
  • Now that the module mixes HTTP-level tests (py_api) and direct function-call tests (list_tasks + expdb_test), consider grouping or naming the tests to make the separation of concerns clearer (e.g., split into separate files or use naming prefixes for API vs. direct-call tests).

## Individual Comments

### Comment 1
<location path="tests/routers/openml/task_list_test.py" line_range="241-244" />
<code_context>
+async def test_list_tasks_no_results(payload: dict[str, Any], expdb_test: AsyncConnection) -> None:
</code_context>
<issue_to_address>
**suggestion (testing):** Consider keeping an HTTP-level `py_api` test for 404 NoResultsError mapping in addition to the direct-call exception test.

The previous version validated the full HTTP behavior (404 status, `application/problem+json` content type, and inclusion of `NoResultsError.uri` in the body). The new version only confirms that `list_tasks` raises the correct Python exception, but not that it’s mapped to the expected HTTP response. Please either reintroduce an HTTP-level test (e.g. `test_list_tasks_no_results_http`) or add a small `py_api` test that asserts 404 + `problem+json` + error type, and keep this direct-call test as-is.

Suggested implementation:

```python
@pytest.mark.parametrize(
    "payload",
    [
        # A tag that does not exist
        {"tag": "nonexistent-tag-123"},
        # A task ID that does not exist
        {"task_id": 999999999},
        # A data_name that does not match any task
        {"data_name": "nonexistent-dataset-123"},
    ],
    ids=["bad_tag", "bad_task_id", "bad_data_name"],
)
async def test_list_tasks_no_results(payload: dict[str, Any], expdb_test: AsyncConnection) -> None:
    """Filters matching nothing raise NoResultsError at the Python API level."""
    # Direct-call test that ensures the router logic raises the domain exception.
    with pytest.raises(NoResultsError):
        await list_tasks(expdb_test, **payload)


@pytest.mark.parametrize(
    "payload",
    [
        {"tag": "nonexistent-tag-123"},
        {"task_id": 999999999},
        {"data_name": "nonexistent-dataset-123"},
    ],
    ids=["bad_tag", "bad_task_id", "bad_data_name"],
)
async def test_list_tasks_no_results_http(
    payload: dict[str, Any],
    async_client: AsyncClient,
) -> None:
    """Filters matching nothing are mapped to HTTP 404 problem+json with NoResultsError type."""
    response = await async_client.get("/openml/tasks", params=payload)

    assert response.status_code == status.HTTP_404_NOT_FOUND
    assert response.headers["content-type"].startswith("application/problem+json")

    body = response.json()
    # Error type should include the NoResultsError identifier/URI
    assert isinstance(body.get("type"), str)
    assert "NoResultsError" in body["type"]

```

To make this compile and follow your project’s conventions, you will likely need to:

1. Ensure imports at the top of `tests/routers/openml/task_list_test.py`:
   - `from httpx import AsyncClient` (or whatever async client type your `async_client` fixture uses).
   - `from starlette import status` or `from fastapi import status` (whichever you already use elsewhere).
   - `from ...routers.openml.task_list import list_tasks` (adjust module path to the actual router function).
   - `from ...exceptions import NoResultsError` (or your actual domain error module).
2. If your test client fixture is named differently (e.g. `client` instead of `async_client`) or is sync-only, update the `test_list_tasks_no_results_http` signature and call style accordingly.
3. If you already have shared “no-results” payload constants or fixtures in this test module, you may want to reuse them rather than hard-coding the example payloads in both parametrizations.
</issue_to_address>

### Comment 2
<location path="tests/routers/openml/task_list_test.py" line_range="86-94" />
<code_context>
+    ],
+    ids=["bad_limit_type", "bad_offset_type"],
+)
+async def test_list_tasks_invalid_pagination_type(
+    pagination_override: dict[str, Any], py_api: httpx.AsyncClient
+) -> None:
+    """Invalid pagination types return 422 Unprocessable Entity."""
+    response = await py_api.post(
+        "/tasks/list",
+        json={"pagination": pagination_override},
+    )
+    assert response.status_code == HTTPStatus.UNPROCESSABLE_ENTITY
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Strengthen the 422 tests by asserting on the validation error structure, not only the status code.

These tests only assert the 422 status. To better guard against regressions, also assert key fields in the error body (e.g. `detail[0]['loc']` includes `pagination.limit`/`pagination.offset` and the error type is `type_error.integer` or equivalent). This ensures the request fails for the intended validation reason, not some other 422 path introduced later.

Suggested implementation:

```python
@pytest.mark.parametrize(
    ("pagination_override", "expected_loc_suffix"),
    [
        ({"limit": "abc", "offset": 0}, ("pagination", "limit")),  # Invalid type
        ({"limit": 5, "offset": "xyz"}, ("pagination", "offset")),  # Invalid type
    ],
    ids=["bad_limit_type", "bad_offset_type"],
)

```

```python
async def test_list_tasks_invalid_pagination_type(
    pagination_override: dict[str, Any],
    expected_loc_suffix: tuple[str, str],
    py_api: httpx.AsyncClient,
) -> None:
    """Invalid pagination types return 422 Unprocessable Entity with expected validation errors."""
    response = await py_api.post(
        "/tasks/list",
        json={"pagination": pagination_override},
    )
    assert response.status_code == HTTPStatus.UNPROCESSABLE_ENTITY

    body = response.json()
    # FastAPI / Pydantic-style validation error structure
    assert "detail" in body
    assert isinstance(body["detail"], list)
    assert body["detail"], "Expected at least one validation error"

    error = body["detail"][0]
    # Ensure we are failing specifically on the pagination field we overrode
    assert "loc" in error
    assert error["loc"][-2:] == list(expected_loc_suffix)

    # Allow for common integer type error codes across Pydantic versions
    assert error.get("type") in {"type_error.integer", "int_parsing", "int_type"}

async def test_list_tasks_default(py_api: httpx.AsyncClient) -> None:

```
</issue_to_address>


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/routers/openml/task_list_test.py`:
- Around line 210-213: The test currently skips validation if NumberOfInstances
is missing; update the assertion to first require that "NumberOfInstances" is
present in the qualities dict (qualities = {q["name"]: q["value"] for q in
task["quality"]}) and then assert that min_instances <=
float(qualities["NumberOfInstances"]) <= max_instances so the test fails when
the filter is absent and only passes when the value exists and lies within the
expected range.
- Around line 143-151: The test test_list_tasks_filter_task_id currently relies
on list_tasks defaulting status to active which makes the ID equality assertion
brittle; update the call to list_tasks in that test to pass an explicit status
argument (e.g., Status.active) alongside Pagination() and task_id to ensure the
filter is deterministic—locate the list_tasks invocation in
test_list_tasks_filter_task_id and add the status parameter using the project's
Status enum or equivalent.
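As an illustration of the stricter range assertion requested in the first comment, using a hypothetical task payload shaped the way the comment describes (the field names and values here are assumptions, not taken from the repository):

```python
# Hypothetical task payload shaped like the comment describes.
task = {
    "task_id": 7,
    "quality": [
        {"name": "NumberOfInstances", "value": "150"},
        {"name": "NumberOfFeatures", "value": "4"},
    ],
}
min_instances, max_instances = 100, 200

qualities = {q["name"]: q["value"] for q in task["quality"]}
# Require the quality to be present, so a task missing the value fails...
assert "NumberOfInstances" in qualities
# ...and only then check that it lies within the requested range.
assert min_instances <= float(qualities["NumberOfInstances"]) <= max_instances
```

The point of the explicit membership check is that a silently absent quality no longer lets the range assertion pass vacuously.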

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 403743a0-a30b-428d-b187-a1445b1ae6b9

📥 Commits

Reviewing files that changed from the base of the PR and between 3f1a7ac and eb9c646.

📒 Files selected for processing (1)
  • tests/routers/openml/task_list_test.py

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
tests/routers/openml/task_list_test.py (1)

150-160: ⚠️ Potential issue | 🟡 Minor

Make the remaining direct filter tests explicit about status.

These tests still rely on the route default TaskStatusFilter.ACTIVE, so a fixture status change can fail them even when the filter under test still works. If the goal is to isolate task_type_id/tag/data_id/data_name processing, pass TaskStatusFilter.ALL explicitly the same way test_list_tasks_filter_task_id already does.

💡 Suggested change
 async def test_list_tasks_filter_type(expdb_test: AsyncConnection) -> None:
     """Filter by task_type_id returns only tasks of that type."""
-    tasks = await list_tasks(pagination=Pagination(), task_type_id=1, expdb=expdb_test)
+    tasks = await list_tasks(
+        pagination=Pagination(),
+        task_type_id=1,
+        status=TaskStatusFilter.ALL,
+        expdb=expdb_test,
+    )
@@
 async def test_list_tasks_filter_tag(expdb_test: AsyncConnection) -> None:
     """Filter by tag returns only tasks with that tag."""
-    tasks = await list_tasks(pagination=Pagination(), tag="OpenML100", expdb=expdb_test)
+    tasks = await list_tasks(
+        pagination=Pagination(),
+        tag="OpenML100",
+        status=TaskStatusFilter.ALL,
+        expdb=expdb_test,
+    )
@@
 async def test_list_tasks_filter_data_id(expdb_test: AsyncConnection) -> None:
     """Filter by data_id returns only tasks that use that dataset."""
     data_id = 10
-    tasks = await list_tasks(pagination=Pagination(), data_id=[data_id], expdb=expdb_test)
+    tasks = await list_tasks(
+        pagination=Pagination(),
+        data_id=[data_id],
+        status=TaskStatusFilter.ALL,
+        expdb=expdb_test,
+    )
@@
 async def test_list_tasks_filter_data_name(expdb_test: AsyncConnection) -> None:
     """Filter by data_name returns only tasks whose dataset matches."""
-    tasks = await list_tasks(pagination=Pagination(), data_name="mfeat-pixel", expdb=expdb_test)
+    tasks = await list_tasks(
+        pagination=Pagination(),
+        data_name="mfeat-pixel",
+        status=TaskStatusFilter.ALL,
+        expdb=expdb_test,
+    )

Also applies to: 177-189

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/routers/openml/task_list_test.py`:
- Around line 238-244: The test functions (e.g.,
test_list_tasks_inputs_are_basic_subset) currently only validate names inside
the inner loop, so a regression that yields empty task["input"] or
task["quality"] would still pass; update the tests that call list_tasks (using
Pagination(...), expdb=expdb_test) to first assert that each sampled task has a
non-empty "input" (and similarly "quality" in the other test) array (e.g.,
assert task["input"] and assert task["quality"]) before iterating over entries
and checking names/types; keep using the same variables (tasks, task, inp) and
the same basic_inputs/basic_qualities sets to perform the existing membership
assertions after the non-empty check.
- Around line 170-174: The test currently collapses tasks to a set (returned_ids
= {t["task_id"] for t in tasks}), which hides duplicate tasks; change the
assertion to compare ordered lists instead: extract the list of task IDs from
tasks (e.g., [t["task_id"] for t in tasks]) and assert that sorted(actual_ids)
== sorted(expected_ids) (where expected_ids is the ids variable) so duplicates
and ordering regressions are detected; update the assertion around the
list_tasks call accordingly.
- Around line 42-71: The test test_list_tasks_negative_pagination_safely_clamped
currently only asserts row count for the negative-offset case so it could pass
even if the server returned the wrong page; update the negative-offset branch to
make an explicit second request with pagination {"limit": limit, "offset": 0} to
/tasks/list using the same py_api client and compare that the returned task_id
list from the negative-offset response equals the task_id list from the offset=0
response (use the JSON bodies from both responses to extract task_id arrays)
instead of only checking len(...) <= expected_max_results.
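The clamping property this comment asks to assert can be illustrated with a minimal stand-in. The clamp-to-zero rule below is an assumption for illustration; the real behavior lives in the endpoint:

```python
ALL_TASKS = list(range(1, 21))  # hypothetical task IDs 1..20


def page(tasks, limit, offset):
    """Return one page after clamping negative values to 0 (illustrative rule)."""
    return tasks[max(0, offset) : max(0, offset) + max(0, limit)]


# If a negative offset is clamped, the result must equal the offset=0 page,
# which is a stronger check than only bounding the row count.
assert page(ALL_TASKS, limit=5, offset=-3) == page(ALL_TASKS, limit=5, offset=0)
```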



📥 Commits

Reviewing files that changed from the base of the PR and between eb9c646 and d59466e.

📒 Files selected for processing (1)
  • tests/routers/openml/task_list_test.py

@saathviksheerla saathviksheerla changed the title refactor functional tests tests: refactor task-list-tests (direct functional calls + happy path) Apr 1, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
tests/routers/openml/task_list_test.py (2)

213-216: Assert the requested page slice, not just the upper bound.

len(tasks) <= limit doesn't prove the offset=5 case returned the second page; it still passes if the implementation returns an under-filled first-page slice. Comparing IDs against the corresponding slice from an offset=0 baseline would make this test match its docstring.

♻️ Suggested tightening
 async def test_list_tasks_pagination(limit: int, offset: int, expdb_test: AsyncConnection) -> None:
     """Pagination limit and offset are respected."""
     tasks = await list_tasks(pagination=Pagination(limit=limit, offset=offset), expdb=expdb_test)
     assert len(tasks) <= limit
+    baseline = await list_tasks(
+        pagination=Pagination(limit=limit + offset, offset=0),
+        expdb=expdb_test,
+    )
+    assert [task["task_id"] for task in tasks] == [
+        task["task_id"] for task in baseline
+    ][offset : offset + limit]

57-60: Correct the NoResultsError code in this docstring.

Line 59 says 482, but this path maps NoResultsError to HTTP 404 and legacy code 372. Keeping the wrong code here will make failure triage misleading.

✏️ Suggested wording
-    A limit clamped to 0 returns a 482 NoResultsError (404 Not Found).
+    A limit clamped to 0 raises NoResultsError, which the API maps to HTTP 404.


📥 Commits

Reviewing files that changed from the base of the PR and between d59466e and 17fc532.

📒 Files selected for processing (1)
  • tests/routers/openml/task_list_test.py

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1



📥 Commits

Reviewing files that changed from the base of the PR and between 17fc532 and 62dec6d.

📒 Files selected for processing (1)
  • tests/routers/openml/task_list_test.py

Comment on lines +281 to +284
async def test_list_tasks_no_results(payload: dict[str, Any], expdb_test: AsyncConnection) -> None:
    """Filters matching nothing return 404 NoResultsError."""
    response = await py_api.post("/tasks/list", json=payload)
    assert response.status_code == HTTPStatus.NOT_FOUND
    assert response.headers["content-type"] == "application/problem+json"
    error = response.json()
    assert error["type"] == NoResultsError.uri


@pytest.mark.parametrize(
    "value",
    ["1...2", "abc"],
    ids=["triple_dot", "non_numeric"],
)
async def test_list_tasks_invalid_range(value: str, py_api: httpx.AsyncClient) -> None:
    """Invalid number_instances format returns 422 Unprocessable Entity."""
    response = await py_api.post("/tasks/list", json={"number_instances": value})
    assert response.status_code == HTTPStatus.UNPROCESSABLE_ENTITY
    with pytest.raises(NoResultsError):
        await list_tasks(pagination=Pagination(), expdb=expdb_test, **payload)

⚠️ Potential issue | 🟡 Minor

Docstring still describes HTTP behavior.

This test now calls list_tasks(...) directly and asserts NoResultsError, so “return 404” is no longer accurate. Please update the wording to match the direct-call path.

✏️ Suggested wording
-    """Filters matching nothing return 404 NoResultsError."""
+    """Filters matching nothing raise NoResultsError on direct calls."""

@saathviksheerla
Contributor Author

Hi @PGijsbers, the PR is ready for your review.
Please let me know your thoughts and any feedback.
Thanks!



Development

Successfully merging this pull request may close these issues.

[maintenance] Refactor task list tests with direct method calls
